Shrinkage estimation of large covariance matrices: Keep it simple, statistician?

Authors

Abstract

Under rotation-equivariant decision theory, sample covariance matrix eigenvalues can be optimally shrunk by recombining sample eigenvectors with a (potentially nonlinear) function of the eigenvalues of the unobservable population covariance matrix. The optimal shape of this function reflects the loss/risk that is to be minimized. We solve the problem of optimal covariance matrix estimation under a variety of loss functions motivated by statistical precedent, probability theory, and differential geometry. A key ingredient of our nonlinear shrinkage methodology is a new estimator of the angle between sample and population eigenvectors, without making strong assumptions on the population eigenvalues. We also introduce a broad family of covariance matrix estimators that can handle all regular functional transformations of the population covariance matrix under large-dimensional asymptotics. In addition, we compare the methods developed in this paper via Monte Carlo simulations to two simpler ones from the literature: linear shrinkage and shrinkage based on the spiked covariance model.
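As a rough illustration of the rotation-equivariant recipe the abstract describes, the sketch below keeps the sample eigenvectors and shrinks the sample eigenvalues. Linear shrinkage toward the grand mean with a fixed intensity `alpha` is used here purely for illustration; it is not the paper's optimal nonlinear, data-driven formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 20  # sample size and dimension

# Simulated data with identity population covariance
X = rng.standard_normal((n, p))
S = X.T @ X / n  # sample covariance matrix

# Rotation-equivariant estimator: keep the sample eigenvectors,
# replace the sample eigenvalues with shrunk versions.
eigvals, eigvecs = np.linalg.eigh(S)

# Linear shrinkage toward the grand mean of the eigenvalues;
# alpha = 0.5 is an arbitrary illustrative intensity.
alpha = 0.5
shrunk = alpha * eigvals.mean() + (1 - alpha) * eigvals

Sigma_hat = eigvecs @ np.diag(shrunk) @ eigvecs.T

# Shrinkage compresses the spectrum, so the estimator is better
# conditioned than the raw sample covariance matrix.
print(np.linalg.cond(Sigma_hat) < np.linalg.cond(S))  # True
```

Any estimator of this form is rotation-equivariant: rotating the data rotates the estimate correspondingly, since only the eigenvalues are modified.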


Related articles

Nonlinear shrinkage estimation of large-dimensional covariance matrices

Many statistical applications require an estimate of a covariance matrix and/or its inverse. Whenthe matrix dimension is large compared to the sample size, which happens frequently, the samplecovariance matrix is known to perform poorly and may suffer from ill-conditioning. There alreadyexists an extensive literature concerning improved estimators in such situations. In the absence offurther kn...


Direct Nonlinear Shrinkage Estimation of Large-Dimensional Covariance Matrices

This paper introduces a nonlinear shrinkage estimator of the covariance matrix that does not require recovering the population eigenvalues first. We estimate the sample spectral density and its Hilbert transform directly by smoothing the sample eigenvalues with a variable-bandwidth kernel. Relative to numerically inverting the so-called QuEST function, the main advantages of direct kernel estim...
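A bare-bones version of the kernel idea in this abstract can be sketched as follows; the Epanechnikov kernel and the proportional bandwidth rule h_i = c·λ_i are assumptions chosen for illustration, not necessarily the paper's exact choices.

```python
import numpy as np

def spectral_density(eigvals, grid, c=0.3):
    """Kernel estimate of the sample spectral density: smooth the
    eigenvalues with a variable bandwidth h_i = c * lambda_i
    (an illustrative rule, not necessarily the paper's)."""
    lam = np.asarray(eigvals)
    h = c * lam                                    # variable bandwidth
    u = (grid[:, None] - lam[None, :]) / h[None, :]
    k = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov
    return (k / h[None, :]).mean(axis=1)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 60))
lam = np.linalg.eigvalsh(X.T @ X / 200)            # sample eigenvalues
grid = np.linspace(1e-3, 1.4 * lam.max(), 500)
f = spectral_density(lam, grid)

# The estimate behaves like a density over the grid: it is
# nonnegative and integrates to approximately one (trapezoidal rule).
area = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))
print(f.min() >= 0, abs(area - 1) < 0.05)
```

The Hilbert transform mentioned in the abstract can be estimated by an analogous smoothing step; it is omitted here to keep the sketch short.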


Estimation of Large Covariance Matrices

This paper considers estimating a covariance matrix of p variables from n observations by either banding or tapering the sample covariance matrix, or estimating a banded version of the inverse of the covariance. We show that these estimates are consistent in the operator norm as long as (logp)/n→ 0, and obtain explicit rates. The results are uniform over some fairly natural well-conditioned fam...
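The banding operator used in this line of work is simple to state: keep entries within k positions of the diagonal and zero out the rest. A minimal sketch (the helper name `band` is ours):

```python
import numpy as np

def band(S, k):
    """Banding: keep entries within k of the diagonal, zero the rest."""
    p = S.shape[0]
    i, j = np.indices((p, p))
    return np.where(np.abs(i - j) <= k, S, 0.0)

# Example: band a 4x4 sample covariance matrix at k = 1
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 4))
S = X.T @ X / 100
B = band(S, 1)
print(B[0, 2], B[0, 3])  # entries beyond the band are exactly 0
```

Tapering is a smooth variant that downweights off-diagonal entries gradually instead of truncating them at a hard cutoff.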


Penalized estimation of covariance matrices with flexible amounts of shrinkage

Penalized maximum likelihood estimation has been advocated for its capability to yield substantially improved estimates of covariance matrices, but so far only cases with equal numbers of records have been considered. We show that a generalization of the inverse Wishart distribution can be utilised to derive penalties which allow for differential penalization for different blocks of the matrice...


Regularized estimation of large covariance matrices




Journal

Journal title: Journal of Multivariate Analysis

Year: 2021

ISSN: 0047-259X, 1095-7243

DOI: https://doi.org/10.1016/j.jmva.2021.104796